machine learning and signal processing
Fault Diagnosis on Induction Motor using Machine Learning and Signal Processing
Samiullah, Muhammad, Ali, Hasan, Zahoor, Shehryar, Ali, Anas
The detection and identification of induction motor faults using machine learning and signal processing is a valuable approach to avoiding plant disturbances and shutdowns in the context of Industry 4.0. In this work, we present a study on detecting and identifying induction motor faults using machine learning and signal processing with MATLAB Simulink. We developed a model of a three-phase induction motor in MATLAB Simulink to generate healthy and faulty motor data. The data collected included stator currents, rotor currents, input power, slip, rotor speed, and efficiency. We introduced four faults in the induction motor: open circuit, short circuit, overload, and broken rotor bars. We collected a total of 150,000 data points with a 60-40% ratio of healthy to faulty motor data. We applied the Fast Fourier Transform (FFT) to distinguish healthy from unhealthy conditions and to add a distinctive feature to our data. The generated dataset was used to train different machine learning models. Comparing the accuracies of the models on the test set, we found that the Decision Tree algorithm performed best, with an accuracy of about 92%. Our study contributes to the literature by providing a valuable approach to fault detection and classification with machine learning models for industrial applications.
- Information Technology > Artificial Intelligence > Representation & Reasoning > Diagnosis (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Expert Systems (0.89)
- Information Technology > Data Science > Data Quality > Data Transformation (0.88)
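The pipeline described in the abstract above, FFT-based features extracted from motor current windows and fed to a Decision Tree classifier, could be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the synthetic signals, sampling rate, window length, and peak-magnitude features are all assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def fft_features(window, fs=10_000, n_peaks=5):
    """Summarize one signal window by its strongest spectral peaks."""
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    top = np.argsort(spectrum)[-n_peaks:]           # indices of the largest peaks
    return np.concatenate([spectrum[top], freqs[top]])

# Purely synthetic stand-in for the Simulink data: a 50 Hz supply component plus
# fault-dependent harmonics and noise. Label 0 = healthy, 1..4 = fault types.
rng = np.random.default_rng(0)
fs, n = 10_000, 2048
t = np.arange(n) / fs
labels = rng.integers(0, 5, size=600)
current_windows = np.array([
    np.sin(2 * np.pi * 50 * t)
    + 0.3 * lab * np.sin(2 * np.pi * (50 + 40 * lab) * t)
    + 0.05 * rng.standard_normal(n)
    for lab in labels
])

X = np.array([fft_features(w, fs=fs) for w in current_windows])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, stratify=labels)

clf = DecisionTreeClassifier(max_depth=10).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```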
Advanced Machine Learning and Signal Processing
By enrolling in this course you agree to the End User License Agreement as set out in the FAQ. This course, Advanced Machine Learning and Signal Processing, is part of the IBM Advanced Data Science Specialization, which IBM is currently creating, and gives you easy access to invaluable insights into the supervised and unsupervised machine learning models used by experts in many relevant disciplines. We'll learn the fundamentals of linear algebra to understand how machine learning models work. Then we introduce the most popular machine learning frameworks for Python: Scikit-Learn and SparkML. SparkML makes up the greatest portion of this course, since scalability is key to addressing performance bottlenecks.
- Information Technology (0.79)
- Education > Educational Technology > Educational Software > Computer Based Training (0.44)
- Education > Educational Setting > Online (0.44)
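As a rough, hypothetical illustration of the two frameworks the course description names, the same logistic regression fit can be expressed in Scikit-Learn (small, in-memory arrays) and in SparkML (a potentially distributed DataFrame). The toy data and column names below are invented for the example.

```python
# Scikit-Learn: fit directly on in-memory arrays
from sklearn.linear_model import LogisticRegression as SkLogisticRegression

X = [[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]]
y = [1, 0, 1, 0]
sk_model = SkLogisticRegression().fit(X, y)

# SparkML: the same fit expressed over a Spark DataFrame
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("lr-demo").getOrCreate()
df = spark.createDataFrame(
    [(0.0, 1.0, 1.0), (1.0, 0.0, 0.0), (1.0, 1.0, 1.0), (0.0, 0.0, 0.0)],
    ["f1", "f2", "label"],
)
assembled = VectorAssembler(inputCols=["f1", "f2"], outputCol="features").transform(df)
spark_model = LogisticRegression(featuresCol="features", labelCol="label").fit(assembled)
```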
Machine Learning and Signal Processing
Signal processing has given us a bag of tools that have been refined and put to very good use over the last fifty years. There is autocorrelation, convolution, Fourier and wavelet transforms, adaptive filtering via Least Mean Squares (LMS) or Recursive Least Squares (RLS), linear estimators, compressed sensing and gradient descent, to mention a few. Different tools are used to solve different problems, and sometimes we use a combination of these tools to build a system to process signals. Machine learning, or rather deep neural networks, is much simpler to get used to because the underlying mathematics is fairly straightforward regardless of which network architecture we use. The complexity and the mystery of neural networks lie in the amount of data they process to get the fascinating results we currently have.
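To make one of these tools concrete, here is a minimal sketch of an LMS adaptive filter in NumPy. The signal, filter length, and step size are invented for the example; it is an illustration of the technique, not production DSP code.

```python
import numpy as np

def lms_filter(x, d, n_taps=8, mu=0.01):
    """Adapt filter weights so the filtered input x tracks the desired signal d."""
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        x_win = x[n - n_taps:n][::-1]   # most recent samples first
        y[n] = w @ x_win                # filter output
        e[n] = d[n] - y[n]              # instantaneous error
        w += 2 * mu * e[n] * x_win      # LMS weight update
    return y, e, w

# Toy usage: adapt the filter so a noisy observation tracks a clean reference sinusoid.
rng = np.random.default_rng(0)
t = np.arange(2000)
clean = np.sin(2 * np.pi * 0.01 * t)
noisy = clean + 0.5 * rng.standard_normal(t.size)
y, e, w = lms_filter(noisy, clean, n_taps=16, mu=0.005)
```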
Do we still need Traditional Pattern Recognition, Machine Learning and Signal Processing in the Age…
Deep learning is one of the most successful methods we have seen in computer science in the last couple of years. Results indicate that many problems can be tackled with it, and impressive new results are published every day. In fact, many traditional methods in pattern recognition seem obsolete. In the scientific community, lecturers in pattern recognition and signal processing discuss whether we need to redesign all of our classes, as many methods no longer reflect the state of the art. It seems that all of them are outperformed by methods based on deep learning.
Wearable tech uses machine learning and signal processing to provide data-driven mental health therapy
Often, when Feel detects an emotion, the app will ask users to describe what's happening and how they feel. That feedback serves three purposes: It helps the algorithm improve, it provides the therapist with richer information, and it prompts journaling, which brings greater self-insight. Chryssoula says the app "challenged me to be specific and analytical in the ways of improving myself, my negative thoughts, and my circling fears." The app might also suggest one of several exercises. For example, users might be asked to recall the key message from their last therapy session and describe how they plan to use that takeaway in their daily lives.
Next-generation Armv8.1-M architecture: Delivering enhanced machine learning and signal processing for the smallest embedded devices
The drive towards a world of a trillion connected devices is accelerating and will continue to do so, but only if we can find ways to efficiently expand the compute capabilities on a greater number of constrained devices at the far edge of the network. Increasing the compute capabilities in these devices will immediately open the door for developers to write machine learning (ML) applications directly for the device, enabling decision-making at the source and thus enhancing data security while cutting down on network energy consumption, latency and bandwidth usage. To achieve this, we're introducing Arm Helium technology, the M-Profile Vector Extension (MVE) for the Arm Cortex-M series processors, which will enhance the compute performance of the Armv8.1-M architecture. Helium will deliver up to 15x more ML performance and up to a 5x uplift in signal processing for future Arm Cortex-M processors, unlocking new market opportunities for our partners where performance challenges have limited the use of low-cost and highly energy-efficient devices. Advanced digital signal processing (DSP) is available today through Arm Neon technology in richer Cortex-A based devices.
Why Isn't Voice Recognition Software More Accurate?
This is an excellent question to start off an automatic speech recognition (ASR) interview. I would slightly rephrase the question as "Why is speech recognition hard?" ASR is just like any other machine learning (ML) problem: the objective is to classify a sound wave into one of the basic units of speech (also called a "class" in ML terminology), such as a word. The problem with human speech is the huge amount of variation that occurs while pronouncing a word. For example, below are two recordings of the word "Yes" spoken by the same person (wave source: AN4 dataset [1]).
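As a rough sketch of what "classifying a sound wave" looks like in practice, ASR front ends typically convert each recording into spectral features such as MFCCs before any classification step. The file names below are hypothetical placeholders for the two "Yes" recordings, and the feature choice is an assumption for illustration.

```python
import librosa

# Hypothetical paths to the two recordings of "Yes" mentioned above.
y1, sr1 = librosa.load("yes_take1.wav", sr=None)
y2, sr2 = librosa.load("yes_take2.wav", sr=None)

# 13 MFCCs per frame: a compact spectral representation widely used in ASR.
mfcc1 = librosa.feature.mfcc(y=y1, sr=sr1, n_mfcc=13)
mfcc2 = librosa.feature.mfcc(y=y2, sr=sr2, n_mfcc=13)

# Even for the same speaker and word, the number of frames and the feature
# values differ; this variation is part of what makes speech recognition hard.
print(mfcc1.shape, mfcc2.shape)
```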